21 research outputs found

    On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence

    In this paper we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound is derived for the probability of correctly identifying the support of a sparse signal in additive white Gaussian noise. Compared to previous work, the new bound takes into account signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous bounds and a closer match to the empirically observed behavior of the OMP algorithm.
    Comment: Submitted to IEEE Signal Processing Letters. arXiv admin note: substantial text overlap with arXiv:1608.0038
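
    The guarantee concerns exact support recovery by OMP from noisy measurements. As a point of reference, the sketch below is a minimal, generic OMP implementation in NumPy, assuming a dictionary with unit-norm atoms and measurements y = Ax + w with white Gaussian noise; the sizes, dynamic range, and stopping rule (a fixed number of iterations) are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def omp(A, y, k):
    """Greedy OMP: select k atoms by maximal correlation with the residual."""
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))     # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef            # residual after least-squares fit
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat, sorted(support)

# Toy experiment: the event analyzed by such bounds is recovering `idx` exactly.
rng = np.random.default_rng(0)
m, n, k = 64, 256, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                         # unit-norm atoms
idx = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[idx] = rng.uniform(1.0, 3.0, size=k) * rng.choice([-1.0, 1.0], size=k)
y = A @ x + 0.01 * rng.standard_normal(m)              # additive white Gaussian noise
_, recovered = omp(A, y, k)
print(recovered == sorted(int(i) for i in idx))
```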

    Sparse representation of visual data for compression and compressed sensing

    The ongoing advances in computational photography have introduced a range of new imaging techniques for capturing multidimensional visual data such as light fields, BRDFs, BTFs, and more. A key challenge inherent to such imaging techniques is the large amount of high-dimensional visual data that is produced, often requiring GBs, or even TBs, of storage. Moreover, the utilization of these datasets in real-time applications poses many difficulties due to the large memory footprint. Furthermore, the acquisition of large-scale visual data is very challenging and expensive in most cases. This thesis makes several contributions with regard to the acquisition, compression, and real-time rendering of high-dimensional visual data in computer graphics and imaging applications. These contributions rest on the foundation of sparse representations. Numerous applications are presented that utilize sparse representations for compression and compressed sensing of visual data. Specifically, we present a single-sensor light field camera design, a compressive rendering method, a real-time precomputed photorealistic rendering technique, light field (video) compression and real-time rendering, compressive BRDF capture, and more. Another key contribution of this thesis is a general framework for compression and compressed sensing of visual data, regardless of dimensionality. As a result, any type of discrete visual data with arbitrary dimensionality can be captured, compressed, and rendered in real time. The thesis also makes two theoretical contributions. In particular, uniqueness conditions for recovering a sparse signal under an ensemble of multidimensional dictionaries are presented. These theoretical results are useful for designing efficient capturing devices for multidimensional visual data. Moreover, we derive the probability of successful recovery of a noisy sparse signal using OMP, one of the most widely used algorithms for solving compressed sensing problems.

    Surface Light Field Generation, Compression and Rendering

    We present a framework for generating, compressing, and rendering Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods; the SLF data is thus generated directly instead of by re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimensions of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered into low-frequency groups of points across all directions, and PCA is then applied to each cluster. The clustering ensures that the within-cluster frequency content of the data is low, allowing projection onto a few principal components. Finally, we reconstruct the CPCA-encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of the discrete SLF data. We applied our rendering method to fast, high-quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited by the complexity of materials or light sources, enabling us to render high-quality images describing the full global illumination in a scene.
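
    As an illustration of the CPCA step described above, the sketch below clusters the rows of an SLF-like matrix and fits a small PCA model per cluster. The matrix layout (spatial samples as rows, directional samples as columns), the cluster count, and the number of principal components are assumptions made for the sketch; scikit-learn is used purely for convenience.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cpca_encode(X, n_clusters=8, n_components=4):
    """Cluster rows of X, then fit a small PCA per cluster (CPCA-style)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    encoded = []
    for c in range(n_clusters):
        rows = np.flatnonzero(labels == c)
        pca = PCA(n_components=min(n_components, len(rows)))
        coeffs = pca.fit_transform(X[rows])            # per-cluster projection
        encoded.append((rows, pca, coeffs))
    return encoded

def cpca_decode(encoded, shape):
    X_hat = np.empty(shape)
    for rows, pca, coeffs in encoded:
        X_hat[rows] = pca.inverse_transform(coeffs)    # low-rank reconstruction
    return X_hat

X = np.random.default_rng(1).standard_normal((1024, 64))   # toy SLF matrix
rel_err = np.linalg.norm(X - cpca_decode(cpca_encode(X), X.shape)) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.3f}")
```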

    On Nonlocal Image Completion Using an Ensemble of Dictionaries

    In this paper we consider the problem of nonlocal image completion from random measurements using an ensemble of dictionaries. Utilizing recent advances in the field of compressed sensing, we derive conditions under which one can uniquely recover an incomplete image with overwhelming probability. The theoretical results are complemented by numerical simulations using various ensembles of analytical and training-based dictionaries.
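
    To make the setting concrete, the sketch below completes a single vectorized patch from a random subset of its pixels by sparse coding over a dictionary restricted to the observed rows. The overcomplete DCT-style dictionary, patch size, and greedy solver are stand-in assumptions, not the dictionary ensembles analyzed in the paper.

```python
import numpy as np

def overcomplete_dct(patch=8, atoms=16):
    """Separable overcomplete DCT-like dictionary for vectorized patch x patch blocks."""
    i = np.arange(patch)[:, None]
    j = np.arange(atoms)[None, :]
    D1 = np.cos(np.pi * i * j / atoms)
    D1[:, 1:] -= D1[:, 1:].mean(axis=0)                # remove DC from non-constant atoms
    D = np.kron(D1, D1)
    return D / np.linalg.norm(D, axis=0)

def complete_patch(p_obs, mask, D, k=4):
    """Recover a full patch from its observed entries via greedy sparse coding."""
    A = D[mask]                                        # keep only observed rows
    support, r = [], p_obs.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], p_obs, rcond=None)
        r = p_obs - A[:, support] @ coef
    return D[:, support] @ coef                        # synthesize the full patch

rng = np.random.default_rng(2)
D = overcomplete_dct()
s = np.zeros(D.shape[1])
s[rng.choice(D.shape[1], size=4, replace=False)] = rng.standard_normal(4)
patch = D @ s                                          # a patch that is 4-sparse in D
mask = rng.random(patch.size) < 0.5                    # observe roughly half the pixels
estimate = complete_patch(patch[mask], mask, D)
print(np.linalg.norm(estimate - patch) / np.linalg.norm(patch))
```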

    Compressive Image Reconstruction in Reduced Union of Subspaces

    We present a new compressed sensing framework for the reconstruction of incomplete and possibly noisy images and their higher-dimensional variants, e.g. animations and light fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that the problem of 2D sparse signal recovery can be converted to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union-of-subspaces model while allowing sparsity in each subspace. Such a model leads to a much sparser representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light fields). We show that our method outperforms state-of-the-art algorithms in the computer graphics and image processing literature.
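
    The 2D-to-1D conversion mentioned above follows from the identity vec(U S V^T) = (V kron U) vec(S): a patch that is sparse under a pair of 2D dictionaries corresponds to a 1D signal that is sparse under their Kronecker product, so any standard 1D sparse solver applies. The sketch below verifies the identity with random orthogonal stand-ins rather than the trained ensemble.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 8                                                  # patch size
U = np.linalg.qr(rng.standard_normal((p, p)))[0]       # stand-in 2D dictionary pair
V = np.linalg.qr(rng.standard_normal((p, p)))[0]

S = np.zeros((p, p))
S[rng.integers(0, p, 3), rng.integers(0, p, 3)] = 1.0  # sparse 2D coefficients
X = U @ S @ V.T                                        # 2D synthesis model

A = np.kron(V, U)                                      # equivalent 1D dictionary
x = X.flatten(order='F')                               # column-major vec(X)
s = S.flatten(order='F')
print(np.allclose(A @ s, x))                           # the two models agree
# Any 1D sparse solver (e.g. OMP) applied to (A, x) can recover vec(S).
```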

    Multi-Shot Single Sensor Light Field Camera Using a Color Coded Mask

    We present a compressed sensing framework for reconstructing the full light field of a scene captured using a single-sensor consumer camera. To achieve this, we use a color coded mask in front of the camera sensor. To further enhance the reconstruction quality, we propose to utilize multiple shots obtained by moving the mask or the sensor randomly. The compressed sensing framework relies on a dictionary trained over a light field data set. Numerical simulations show significant improvements in reconstruction quality over a similar coded aperture system for light field capture.
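
    A simplified simulation of the multi-shot coded-mask measurement model: each shot sums the angular dimension of the light field weighted by a (randomly shifted) color mask, giving one sensor image per shot. The sizes, the mask model, and the shift pattern are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ang, n_pix, n_col = 5 * 5, 32 * 32, 3          # angular views, sensor pixels, RGB
L = rng.random((n_ang, n_pix, n_col))            # toy light field
mask = rng.random((n_ang, n_pix, n_col))         # color coded mask transmittance

def capture(light_field, mask, shift):
    """One shot: the sensor integrates the mask-modulated light field over angles."""
    shifted = np.roll(mask, shift, axis=1)       # random mask/sensor movement
    return (light_field * shifted).sum(axis=0)

shots = np.stack([capture(L, mask, s) for s in (0, 3, 7)])
print(shots.shape)                               # (n_shots, n_pix, n_col)
# Recovery would then solve a per-patch sparse-coding problem in a trained
# dictionary, with the per-shot mask weights forming the rows of the sensing matrix.
```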

    A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos

    In this article we present a novel dictionary learning framework designed for compression and sampling of light fields and light field videos. Unlike previous methods, where a single dictionary with one-dimensional atoms is learned, we propose to train a Multidimensional Dictionary Ensemble (MDE). It is shown that learning an ensemble in the native dimensionality of the data promotes sparsity, hence increasing the compression ratio and sampling efficiency. To make maximum use of correlations within light field data sets, we also introduce a novel nonlocal pre-clustering approach that constructs an Aggregate MDE (AMDE). The pre-clustering not only improves the image quality but also reduces the training time by an order of magnitude in most cases. The decoding algorithm supports efficient local reconstruction of the compressed data, which enables real-time playback of high-resolution light field videos. Moreover, we discuss the application of AMDE for compressed sensing. A theoretical analysis is presented that indicates the required conditions for exact recovery of point-sampled light fields that are sparse under AMDE. The analysis provides guidelines for designing efficient compressive light field cameras. We use various synthetic and natural light field and light field video data sets to demonstrate the utility of our approach in comparison with state-of-the-art learning-based dictionaries, as well as established analytical dictionaries.
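
    A hedged sketch of the pre-clustering and ensemble idea: patches are first clustered nonlocally, and a pair of orthogonal bases is then fitted per cluster from the two mode unfoldings of that cluster's patches. This is a simple 2D stand-in for the trained multidimensional dictionaries; the actual MDE/AMDE training procedure and clustering criterion are not reproduced here, and scikit-learn's k-means is used for convenience.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_ensemble(patches, n_clusters=4):
    """patches: array of shape (N, p, p); returns one (U, V) basis pair per cluster."""
    N, p, _ = patches.shape
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        patches.reshape(N, -1))
    ensemble = []
    for c in range(n_clusters):
        P = patches[labels == c]
        # Row/column bases from the two mode unfoldings of the cluster's patches
        U = np.linalg.svd(P.transpose(1, 0, 2).reshape(p, -1), full_matrices=False)[0]
        V = np.linalg.svd(P.transpose(2, 0, 1).reshape(p, -1), full_matrices=False)[0]
        ensemble.append((U, V))
    return ensemble, labels

patches = np.random.default_rng(5).random((500, 8, 8))
ensemble, labels = train_ensemble(patches)
U, V = ensemble[0]
coeffs = U.T @ patches[labels == 0][0] @ V       # 2D analysis coefficients of one patch
print(coeffs.shape)
```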

    A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence

    In this paper, we present a new performance guarantee for the orthogonal matching pursuit (OMP) algorithm. We use mutual coherence as a metric for determining the suitability of an arbitrary overcomplete dictionary for exact recovery. Specifically, a lower bound for the probability of correctly identifying the support of a sparse signal in additive white Gaussian noise and an upper bound on the mean square error are derived. Compared to previous work, the new bound takes into account signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous bounds and a much closer match to the empirical performance of OMP.
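
    Mutual coherence, the metric the guarantee is stated in, is the largest absolute inner product between two distinct normalized atoms. The sketch below computes it for a random dictionary; the worst-case condition quoted in the final comment is the classical k < (1 + 1/mu)/2 result from the literature, not the probabilistic bound derived in this paper.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute correlation between two distinct (normalized) atoms."""
    G = A / np.linalg.norm(A, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)                  # ignore self-correlations
    return gram.max()

rng = np.random.default_rng(6)
A = rng.standard_normal((64, 256))
mu = mutual_coherence(A)
print(f"mu(A) = {mu:.3f}")
# Classical worst-case analyses require roughly k < (1 + 1/mu) / 2 for exact
# recovery; probabilistic bounds of the kind above are typically far less pessimistic.
```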

    Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes

    We present an algorithm for compression and real-time rendering of surface light fields (SLF) encoding the visual appearance of objects in static scenes with high-frequency variations. We apply a non-local clustering in order to exploit spatial coherence in the SLF data. To efficiently encode the data in each cluster, we introduce a learning based approach, Clustered Exemplar Orthogonal Bases (CEOB), which trains a compact dictionary of orthogonal basis pairs, enabling efficient sparse projection of the SLF data. In addition, we discuss the application of the traditional Clustered Principal Component Analysis (CPCA) on SLF data, and show that in most cases CEOB outperforms CPCA, K-SVD, and spherical harmonics in terms of memory footprint, rendering performance, and reconstruction quality. Our method enables efficient reconstruction and real-time rendering of scenes with complex materials and light sources that were not possible to render in real time using previous methods.
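
    To illustrate why orthogonal basis pairs are attractive for sparse projection (the property CEOB builds on), the sketch below computes the best k-term approximation of a patch under an orthogonal pair (U, V) by hard-thresholding the transform coefficients U^T X V. The bases here are random orthogonal stand-ins rather than trained exemplars, and the cluster assignment step is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
p, k = 8, 6
U = np.linalg.qr(rng.standard_normal((p, p)))[0]   # stand-in orthogonal basis pair
V = np.linalg.qr(rng.standard_normal((p, p)))[0]
X = rng.random((p, p))                             # one SLF cluster sample

C = U.T @ X @ V                                    # forward transform
thr = np.sort(np.abs(C), axis=None)[-k]            # magnitude of the k-th largest coefficient
C_k = np.where(np.abs(C) >= thr, C, 0.0)           # hard thresholding keeps k coefficients
X_k = U @ C_k @ V.T                                # reconstruction from k coefficients
print(np.count_nonzero(C_k), np.linalg.norm(X - X_k))
```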